Phil 3.1.17

It’s March and no new wars! Hooray!

7:00 – 8:00 Research

8:30 – 4:30 BRC

  • More TensorFlow
    • MNIST tutorial – clear, but a LOT of stuff
    • Neural Networks and Deep Learning is an online book referenced in the TF documentation (at least the softmax chapter)
    • A one-hot vector is a vector which is 0 in most dimensions, and 1 in a single dimension. In this case, the nth digit will be represented as a vector which is 1 in the nth dimension. For example, 3 would be [0,0,0,1,0,0,0,0,0,0]. Consequently, mnist.train.labels is a [55000, 10] array of floats.
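      As a sanity check on the one-hot idea, here's a quick numpy sketch (my own toy example, not from the tutorial):

```python
import numpy as np

def one_hot(digit, num_classes=10):
    """Return a vector that is 1 in the digit's dimension and 0 everywhere else."""
    vec = np.zeros(num_classes, dtype=np.float32)
    vec[digit] = 1.0
    return vec

print(one_hot(3))  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```

      Stacking 55000 of these is exactly the [55000, 10] mnist.train.labels array.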
    • If you want to assign probabilities to an object being one of several different things, softmax is the thing to do, because softmax gives us a list of values between 0 and 1 that add up to 1. Even later on, when we train more sophisticated models, the final step will be a layer of softmax.
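      The "values between 0 and 1 that add up to 1" property is easy to verify with a small numpy sketch of softmax (my own toy numbers, not MNIST):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; it cancels out and
    # doesn't change the result.
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

probs = softmax(np.array([2.0, 1.0, 0.1]))
# probs are all in (0, 1) and sum to 1, so they read as probabilities
```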
    • x = tf.placeholder(tf.float32, [None, 784])

      We represent this as a 2-D tensor of floating-point numbers, with a shape [None, 784]. (Here None means that a dimension can be of any length.)

    • A good explanation of cross-entropy, apparently.
    • tf.reduce_mean
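      The tutorial's cross_entropy line combines a reduce_sum over each example with a reduce_mean over the batch; here's the same arithmetic in plain numpy (my own toy data, not MNIST):

```python
import numpy as np

def cross_entropy(y_true, y_pred):
    """Mean over the batch of -sum(y_ * log(y)), matching the tutorial's formula."""
    return np.mean(-np.sum(y_true * np.log(y_pred), axis=1))

y_true = np.array([[0.0, 1.0], [1.0, 0.0]])   # one-hot labels
y_pred = np.array([[0.1, 0.9], [0.8, 0.2]])   # softmax outputs
loss = cross_entropy(y_true, y_pred)
# confident, correct predictions give a small loss
```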
    • Success!!! Here’s the code:
      import tensorflow as tf
      from tensorflow.examples.tutorials.mnist import input_data
      mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
      x = tf.placeholder(tf.float32, [None, 784])
      W = tf.Variable(tf.zeros([784, 10]))
      b = tf.Variable(tf.zeros([10]))
      y = tf.nn.softmax(tf.matmul(x, W) + b)
      y_ = tf.placeholder(tf.float32, [None, 10]) # note that y_ means 'y prime'
      cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
      train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
      sess = tf.InteractiveSession()
      tf.global_variables_initializer().run()
      for _ in range(1000):
          batch_xs, batch_ys = mnist.train.next_batch(100)
          sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
      correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
      print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
    • And here are the results:
      C:\Users\philip.feldman\AppData\Local\Programs\Python\Python35\python.exe C:/Development/Sandboxes/TensorflowPlayground/HelloPackage/
      Extracting MNIST_data/train-images-idx3-ubyte.gz
      Extracting MNIST_data/train-labels-idx1-ubyte.gz
      Extracting MNIST_data/t10k-images-idx3-ubyte.gz
      Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
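      The accuracy lines in the code above (argmax, equal, cast, reduce_mean) can be sketched in numpy with made-up predictions of my own:

```python
import numpy as np

y = np.array([[0.1, 0.9], [0.7, 0.3], [0.4, 0.6]])   # predicted probabilities
y_ = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 0.0]])  # true one-hot labels

# argmax picks the most likely class per row; equal compares prediction
# to truth; the mean of the resulting 0/1 values is the accuracy.
correct = np.equal(np.argmax(y, 1), np.argmax(y_, 1))
accuracy = np.mean(correct.astype(np.float32))
# two of the three predictions match, so accuracy is 2/3
```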
    • Working on the advanced tutorial. Fixed to work with local data.
    • And then my brain died
