How to Monitor Your TF Training with WeChat

By 汪思穎 · 2017-09-28 14:57
Lead: a new way to play with TensorFlow — use WeChat to supervise your training.

雷鋒網 AI科技評論 editor's note: this article was written by Coldwings and is published with the author's permission.

A while back, answering the question "Model training takes anywhere from tens of minutes to several hours — what do you all do while waiting for experiments?", I mentioned that you can let WeChat supervise the training so you don't have to babysit it at all. I didn't expect the idea to be this popular...

My answer under the original question:

I wonder how many of you are hacking away in Python with frameworks like TF/keras/chainer/mxnet...

This is Python we're talking about... Bring in itchat, set up a WeChat account and add yourself as a friend (or just message yourself), and have the training send you progress messages as it goes. If you've built any visualization, send the plots along too.

Then you can sleep / go shopping / go on a date / write answers with peace of mind.

Honestly, you can even do simple parameter tuning straight from your phone...
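A minimal sketch of the idea (my own illustration, not the author's code: it assumes itchat's auto_login and send; 'filehelper' is WeChat's built-in File Transfer assistant, so you can message yourself without adding any friend; train_one_epoch is a hypothetical stand-in for your real training step):

      import itchat

      def train_one_epoch():
          # hypothetical stand-in for your real per-epoch training step
          return 0.0

      itchat.auto_login(hotReload=True)  # scan the QR code once; the session is cached

      for epoch in range(10):
          loss = train_one_epoch()
          # 'filehelper' is WeChat's built-in File Transfer assistant,
          # so the message goes straight to yourself
          itchat.send('epoch %d done, loss=%.4f' % (epoch, loss), toUserName='filehelper')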

The general effect looks like this:

[Screenshots: a WeChat conversation showing training-progress messages]

Of course, you can make this more complete. The most robust approach is to stand up a proper HTTP service or an RPC endpoint, but that is usually more trouble than it's worth. In the spirit of simplicity, a few lines of code that get the job done are best, and hooking into WeChat or a web page is a good fit. For passive viewing, TensorBoard is already excellent; but if you want custom actions, you still have to roll your own. Building a web front end with echat.js, or a WeChat service with itchat, are both decent choices.
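For contrast, even the "heavier" HTTP alternative dismissed above fits in a few standard-library lines — a hypothetical Python 3 status endpoint, not part of the author's code; the training loop would be responsible for updating the status dict:

      from http.server import BaseHTTPRequestHandler, HTTPServer
      import threading

      status = {'step': 0, 'loss': None}  # updated by the training loop

      class StatusHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # Report the latest training status as plain text
              body = ('step=%(step)s loss=%(loss)s' % status).encode('utf-8')
              self.send_response(200)
              self.send_header('Content-Type', 'text/plain; charset=utf-8')
              self.send_header('Content-Length', str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      server = HTTPServer(('0.0.0.0', 8000), StatusHandler)
      # Serve in a daemon thread so training itself is never blocked
      threading.Thread(target=server.serve_forever, daemon=True).start()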

On to the main content.

Let's put together an example. Taking the CNN-on-MNIST program from TensorFlow's examples as the starting point, we make a few small modifications.

First, here is the finished code:

      #!/usr/bin/env python
      # coding: utf-8

      '''
      A Convolutional Network implementation example using TensorFlow library.
      This example is using the MNIST database of handwritten digits
      (http://yann.lecun.com/exdb/mnist/)
      Author: Aymeric Damien
      Project: https://github.com/aymericdamien/TensorFlow-Examples/

      Add an itchat controller with multi thread
      '''

      from __future__ import print_function

      import tensorflow as tf

      # Import MNIST data
      from tensorflow.examples.tutorials.mnist import input_data

      # Import itchat & threading
      import itchat
      import threading

      # Create a running status flag
      lock = threading.Lock()
      running = False

      # Parameters
      learning_rate = 0.001
      training_iters = 200000
      batch_size = 128
      display_step = 10


      def nn_train(wechat_name, param):

         global lock, running
         # Lock
         with lock:
             running = True

         # mnist data reading
         mnist = input_data.read_data_sets("data/", one_hot=True)

         # Parameters
         # learning_rate = 0.001
         # training_iters = 200000
         # batch_size = 128
         # display_step = 10
         learning_rate, training_iters, batch_size, display_step = param

         # Network Parameters
         n_input = 784 # MNIST data input (img shape: 28*28)
         n_classes = 10 # MNIST total classes (0-9 digits)
         dropout = 0.75 # Dropout, probability to keep units

         # tf Graph input
         x = tf.placeholder(tf.float32, [None, n_input])
         y = tf.placeholder(tf.float32, [None, n_classes])
         keep_prob = tf.placeholder(tf.float32) #dropout (keep probability)


         # Create some wrappers for simplicity
         def conv2d(x, W, b, strides=1):
             # Conv2D wrapper, with bias and relu activation
             x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
             x = tf.nn.bias_add(x, b)
             return tf.nn.relu(x)


         def maxpool2d(x, k=2):
             # MaxPool2D wrapper
             return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                                 padding='SAME')


         # Create model
         def conv_net(x, weights, biases, dropout):
             # Reshape input picture
             x = tf.reshape(x, shape=[-1, 28, 28, 1])

             # Convolution Layer
             conv1 = conv2d(x, weights['wc1'], biases['bc1'])
             # Max Pooling (down-sampling)
             conv1 = maxpool2d(conv1, k=2)

             # Convolution Layer
             conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
             # Max Pooling (down-sampling)
             conv2 = maxpool2d(conv2, k=2)

             # Fully connected layer
             # Reshape conv2 output to fit fully connected layer input
             fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
             fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
             fc1 = tf.nn.relu(fc1)
             # Apply Dropout
             fc1 = tf.nn.dropout(fc1, dropout)

             # Output, class prediction
             out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
             return out

         # Store layers weight & bias
         weights = {
             # 5x5 conv, 1 input, 32 outputs
             'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
             # 5x5 conv, 32 inputs, 64 outputs
             'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
             # fully connected, 7*7*64 inputs, 1024 outputs
             'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
             # 1024 inputs, 10 outputs (class prediction)
             'out': tf.Variable(tf.random_normal([1024, n_classes]))
         }

         biases = {
             'bc1': tf.Variable(tf.random_normal([32])),
             'bc2': tf.Variable(tf.random_normal([64])),
             'bd1': tf.Variable(tf.random_normal([1024])),
             'out': tf.Variable(tf.random_normal([n_classes]))
         }

         # Construct model
         pred = conv_net(x, weights, biases, keep_prob)

         # Define loss and optimizer
         cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
         optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

         # Evaluate model
         correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
         accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))


         # Initializing the variables
         init = tf.global_variables_initializer()

         # Launch the graph
         with tf.Session() as sess:
             sess.run(init)
             step = 1
             # Keep training until reach max iterations
             print('Wait for lock')
             with lock:
                 run_state = running
             print('Start')
             while step * batch_size < training_iters and run_state:
                 batch_x, batch_y = mnist.train.next_batch(batch_size)
                 # Run optimization op (backprop)
                 sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                             keep_prob: dropout})
                 if step % display_step == 0:
                     # Calculate batch loss and accuracy
                     loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                                     y: batch_y,
                                                                     keep_prob: 1.})
                     print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \ 

                          "{:.6f}".format(loss) + ", Training Accuracy= " + \

                          "{:.5f}".format(acc))
                     itchat.send("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \

                          "{:.6f}".format(loss) + ", Training Accuracy= " + \

                                  "{:.5f}".format(acc), wechat_name)
                 step += 1
                 with lock:
                     run_state = running
             print("Optimization Finished!")
             itchat.send("Optimization Finished!", wechat_name)

             # Calculate accuracy for 256 mnist test images
             print("Testing Accuracy:", \

                  sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                             y: mnist.test.labels[:256],
                                             keep_prob: 1.}))
             itchat.send("Testing Accuracy: %s" %
                 sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                             y: mnist.test.labels[:256],
                                               keep_prob: 1.}), wechat_name)

         with lock:
             running = False


      @itchat.msg_register([itchat.content.TEXT])
      def chat_trigger(msg):
         global lock, running, learning_rate, training_iters, batch_size, display_step
         if msg['Text'] == u'開始':
             print('Starting')
             with lock:
                 run_state = running
             if not run_state:
                 try:
                     threading.Thread(target=nn_train, args=(msg['FromUserName'], (learning_rate, training_iters, batch_size, display_step))).start()
                 except:
                     msg.reply('Running')
         elif msg['Text'] == u'停止':
             print('Stopping')
             with lock:
                 running = False
         elif msg['Text'] == u'參數':
             itchat.send('lr=%f, ti=%d, bs=%d, ds=%d'%(learning_rate, training_iters, batch_size, display_step),msg['FromUserName'])
         else:
             try:
                 param = msg['Text'].split()
                 key, value = param
                 print(key, value)
                 if key == 'lr':
                     learning_rate = float(value)
                 elif key == 'ti':
                     training_iters = int(value)
                 elif key == 'bs':
                     batch_size = int(value)
                 elif key == 'ds':
                     display_step = int(value)
             except:
                 pass



      if __name__ == '__main__':
         itchat.auto_login(hotReload=True)
         itchat.run()

The main changes I made in this code are:

0. Imported itchat and threading.

1. Moved the network construction and training from the original script into a function, nn_train:

      def nn_train(wechat_name, param):
         global lock, running
         # Lock
         with lock:
             running = True

         # mnist data reading
         mnist = input_data.read_data_sets("data/", one_hot=True)

         # Parameters
         # learning_rate = 0.001
         # training_iters = 200000
         # batch_size = 128
         # display_step = 10
         learning_rate, training_iters, batch_size, display_step = param

         # Network Parameters
         n_input = 784 # MNIST data input (img shape: 28*28)
         n_classes = 10 # MNIST total classes (0-9 digits)
         dropout = 0.75 # Dropout, probability to keep units

         # tf Graph input
         x = tf.placeholder(tf.float32, [None, n_input])
         y = tf.placeholder(tf.float32, [None, n_classes])
         keep_prob = tf.placeholder(tf.float32) #dropout (keep probability)


         # Create some wrappers for simplicity
         def conv2d(x, W, b, strides=1):
             # Conv2D wrapper, with bias and relu activation
             x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
             x = tf.nn.bias_add(x, b)
             return tf.nn.relu(x)


         def maxpool2d(x, k=2):
             # MaxPool2D wrapper
             return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                                 padding='SAME')


         # Create model
         def conv_net(x, weights, biases, dropout):
             # Reshape input picture
             x = tf.reshape(x, shape=[-1, 28, 28, 1])

             # Convolution Layer
             conv1 = conv2d(x, weights['wc1'], biases['bc1'])
             # Max Pooling (down-sampling)
             conv1 = maxpool2d(conv1, k=2)

             # Convolution Layer
             conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
             # Max Pooling (down-sampling)
             conv2 = maxpool2d(conv2, k=2)

             # Fully connected layer
             # Reshape conv2 output to fit fully connected layer input
             fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
             fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
             fc1 = tf.nn.relu(fc1)
             # Apply Dropout
             fc1 = tf.nn.dropout(fc1, dropout)

             # Output, class prediction
             out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
             return out

         # Store layers weight & bias
         weights = {
             # 5x5 conv, 1 input, 32 outputs
             'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
             # 5x5 conv, 32 inputs, 64 outputs
             'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
             # fully connected, 7*7*64 inputs, 1024 outputs
             'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
             # 1024 inputs, 10 outputs (class prediction)
             'out': tf.Variable(tf.random_normal([1024, n_classes]))
         }

         biases = {
             'bc1': tf.Variable(tf.random_normal([32])),
             'bc2': tf.Variable(tf.random_normal([64])),
             'bd1': tf.Variable(tf.random_normal([1024])),
             'out': tf.Variable(tf.random_normal([n_classes]))
         }

         # Construct model
         pred = conv_net(x, weights, biases, keep_prob)

         # Define loss and optimizer
         cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
         optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

         # Evaluate model
         correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
         accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))


         # Initializing the variables
         init = tf.global_variables_initializer()

         # Launch the graph
         with tf.Session() as sess:
             sess.run(init)
             step = 1
             # Keep training until reach max iterations
             print('Wait for lock')
             with lock:
                 run_state = running
             print('Start')
             while step * batch_size < training_iters and run_state:
                 batch_x, batch_y = mnist.train.next_batch(batch_size)
                 # Run optimization op (backprop)
                 sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                             keep_prob: dropout})
                 if step % display_step == 0:
                     # Calculate batch loss and accuracy
                     loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                                     y: batch_y,
                                                                     keep_prob: 1.})
                     print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \

                          "{:.6f}".format(loss) + ", Training Accuracy= " + \

                          "{:.5f}".format(acc))
                     itchat.send("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \

                          "{:.6f}".format(loss) + ", Training Accuracy= " + \

                                  "{:.5f}".format(acc), wechat_name)
                 step += 1
                 with lock:
                     run_state = running
             print("Optimization Finished!")
             itchat.send("Optimization Finished!", wechat_name)

             # Calculate accuracy for 256 mnist test images
             print("Testing Accuracy:", \

                  sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                             y: mnist.test.labels[:256],
                                             keep_prob: 1.}))
             itchat.send("Testing Accuracy: %s" %
                 sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                             y: mnist.test.labels[:256],
                                               keep_prob: 1.}), wechat_name)

         with lock:
             running = False

Most of this is the same as the original code. The differences: everywhere there was a print, an itchat.send was added so the log is also pushed to WeChat; a lock-protected status flag, running, was added as a run switch; and some of the parameters are now passed in as function arguments.
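As an aside, one way to avoid duplicating every print/itchat.send pair would be a tiny helper — my own refactoring sketch, not part of the author's code:

      def log(text, wechat_name):
          # Print locally and mirror the same line to the WeChat contact
          print(text)
          itchat.send(text, wechat_name)

      # e.g. log("Iter %d, Minibatch Loss= %.6f" % (step * batch_size, loss), wechat_name)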

Then I wrote an itchat handler:

      @itchat.msg_register([itchat.content.TEXT])
      def chat_trigger(msg):
         global lock, running, learning_rate, training_iters, batch_size, display_step
         if msg['Text'] == u'開始':
             print('Starting')
             with lock:
                 run_state = running
             if not run_state:
                 try:
                     threading.Thread(target=nn_train, args=(msg['FromUserName'], (learning_rate, training_iters, batch_size, display_step))).start()
                 except:
                     msg.reply('Running')

What this does: when a WeChat message arrives whose text is '開始' ("start"), it runs the training function — in a separate thread, of course, so the handler doesn't block.
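For reference, the minimal shape of an itchat text handler is just this (a toy echo bot; per itchat's convention, a string returned from a handler is sent back to the sender as the reply):

      @itchat.msg_register(itchat.content.TEXT)
      def echo(msg):
          # Whatever string the handler returns is sent back as the reply
          return msg['Text']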

Finally, in the script's main flow, log in to WeChat with itchat and start itchat's message loop. That gives us basic control.

      if __name__ == '__main__':
         itchat.auto_login(hotReload=True)
         itchat.run()

But we don't have to stop there. I also wanted some control over the process and the ability to tweak parameters, hence:

      @itchat.msg_register([itchat.content.TEXT])
      def chat_trigger(msg):
         global lock, running, learning_rate, training_iters, batch_size, display_step
         if msg['Text'] == u'開始':
             print('Starting')
             with lock:
                 run_state = running
             if not run_state:
                 try:
                     threading.Thread(target=nn_train, args=(msg['FromUserName'], (learning_rate, training_iters, batch_size, display_step))).start()
                 except:
                     msg.reply('Running')
         elif msg['Text'] == u'停止':
             print('Stopping')
             with lock:
                 running = False
         elif msg['Text'] == u'參數':
             itchat.send('lr=%f, ti=%d, bs=%d, ds=%d'%(learning_rate, training_iters, batch_size, display_step),msg['FromUserName'])
         else:
             try:
                 param = msg['Text'].split()
                 key, value = param
                 print(key, value)
                 if key == 'lr':
                     learning_rate = float(value)
                 elif key == 'ti':
                     training_iters = int(value)
                 elif key == 'bs':
                     batch_size = int(value)
                 elif key == 'ds':
                     display_step = int(value)
             except:
                 pass

With this, we can stop mid-epoch (because nn_train checks the running flag to decide whether to stop), and we can adjust learning_rate and the other parameters before training starts.
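To illustrate, a control session from the phone side might look like this (my own example exchange, matching the handler above; '參數', '開始', '停止' are the literal commands the handler checks for):

      參數         → bot replies: lr=0.001000, ti=200000, bs=128, ds=10
      lr 0.0005    → sets learning_rate = 0.0005
      開始         → starts nn_train in a background thread
      停止         → sets running = False; the training loop exits at the next step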

It really is that simple...
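As a closing aside, the same trick ports naturally to the other frameworks mentioned at the start. In Keras, for instance, it fits into a callback — a hedged sketch of mine, assuming the standard keras.callbacks.Callback API and a prior itchat.auto_login():

      import itchat
      from keras.callbacks import Callback

      class WeChatMonitor(Callback):
          def __init__(self, wechat_name='filehelper'):
              Callback.__init__(self)
              self.wechat_name = wechat_name

          def on_epoch_end(self, epoch, logs=None):
              # Push the epoch's metrics (loss, acc, ...) to WeChat
              itchat.send('epoch %d: %s' % (epoch, logs or {}), self.wechat_name)

      # usage: model.fit(x, y, callbacks=[WeChatMonitor()])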

This is a copyrighted 雷鋒網 article; reproduction without authorization is prohibited.
