Intro To TensorFlow

TensorFlow - A Fast Introduction

https://www.tensorflow.org


An open source library for numerical computation using data flow graphs

Okay, so what is it?

Hmm, in short, it’s a wonderful library provided by Google for general-purpose machine learning problems, especially deep learning. The problems could be visual recognition, machine translation, natural language processing (NLP), or even simple linear regression. By the way, it has made researchers’ and developers’ lives easier by providing tons of predefined packages and high-level utilities, like TensorBoard.

Alright, let’s get to know TensorFlow and how it is different from the regular NumPy library (in case you are already doing a lot of Python and cannot think of anything other than NumPy and pandas).

Concept

TensorFlow expresses a numeric computation as a graph.

Like a network graph, where each node is an operation which can have any number of inputs and outputs.

The edges between the nodes are tensors, which flow between the nodes.

What is a Tensor?

Just a quick refresher:

  • Vectors – 1-d arrays of numbers; the default is a column vector.
  • Matrices – 2-d arrays of numbers, represented in rows and columns.
  • Tensors – n-d arrays of numbers, for cases where more than two axes are required to represent the data.
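In NumPy terms, a quick sketch (the values and shapes here are just illustrative):

import numpy as np

v = np.array([1.0, 2.0, 3.0])            # vector: 1-d, shape (3,)
M = np.array([[1.0, 2.0], [3.0, 4.0]])   # matrix: 2-d, shape (2, 2)
T = np.zeros((2, 3, 4))                  # tensor: n-d, here 3 axes, shape (2, 3, 4)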

Variables

Variables in TensorFlow are stateful nodes which hold their current value and can be modified across multiple executions of a graph.

They can be treated as parameters whose values change during execution. For example, when performing an optimization like gradient descent, the weights and biases can be treated as variables, as their values change in each iteration.

Also, the variable values remain in the graph, so once the model is built, it can be passed around without carrying the original dataset.
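As a minimal sketch (this counter is purely illustrative and not part of the example later in this post):

import tensorflow as tf

counter = tf.Variable(0, name="counter")     # stateful node, starts at 0
increment = tf.assign(counter, counter + 1)  # operation that updates the variable

with tf.Session() as session:
    session.run(tf.global_variables_initializer())  # variables must be initialized first
    for _ in range(3):
        print(session.run(increment))  # prints 1, 2, 3 - the state persists across runs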

Placeholders

Placeholders in TensorFlow are nodes whose values are fed in at execution time. Unlike variables, placeholder values are not changed by the algorithm.

For example, all the inputs (Xs) and labels (Ys) in a classification problem can be considered placeholders during optimization.
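A minimal sketch (the placeholder name and fed values are made up for illustration):

import tensorflow as tf

a = tf.placeholder(tf.float32, name="a")  # value will be supplied at run time
doubled = a * 2

with tf.Session() as session:
    # feed_dict supplies the placeholder's value for this particular run
    print(session.run(doubled, feed_dict={a: 3.0}))  # 6.0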

Operations

Mathematical operations like Add, Multiply, MatMul, or ReLU are the nodes in TensorFlow which take input(s), perform the numerical operation, and generate the output.

For example,

MatMul: multiplies two matrix nodes. (Always consider the shapes of the matrices, e.g. mxn * nxp = mxp; a good practice is to comment such an expression with the shapes of the matrices or vectors being multiplied and the expected output shape.)

ReLU: activation function, which performs the elementwise rectified linear function.

The equation could be:

h(x) = ReLU(Wx + b)

where W is the matrix of weights, x is the vector of inputs, and b is the vector of biases.
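As a sketch, the same expression in TensorFlow could look like this (the shapes are assumptions for illustration, with shape comments as suggested above):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(3, 1), name="x")  # input vector, 3x1
W = tf.Variable(tf.random_normal((2, 3)), name="W")     # weight matrix, 2x3
b = tf.Variable(tf.zeros((2, 1)), name="b")             # bias vector, 2x1

h = tf.nn.relu(tf.matmul(W, x) + b)  # 2x3 * 3x1 = 2x1, plus 2x1 bias, then elementwise ReLU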

Let’s see it in action.

Example

Let’s start with something simple, and what could be simpler than a linear regression example?

import numpy as np
import tensorflow as tf

# Graph Input

X = tf.placeholder(tf.float32, shape=(None,), name="X")
y = tf.placeholder(tf.float32, shape=(None,), name="y")

Here we defined X and y, two placeholders which can hold the inputs. Also note that the second argument, shape=(None,), says that these placeholders accept 1-dimensional values of dynamic size. We can use the None value to allow any batch size.
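A standalone sketch of what None allows (separate from the running example; the placeholder name and values are made up):

import numpy as np
import tensorflow as tf

values = tf.placeholder(tf.float32, shape=(None,), name="values")
doubled = values * 2

with tf.Session() as session:
    print(session.run(doubled, feed_dict={values: [1.0, 2.0]}))            # batch of 2
    print(session.run(doubled, feed_dict={values: np.linspace(0, 1, 5)}))  # batch of 5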

# Set model weights

W = tf.Variable(np.random.normal(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")

The above lines declare the weight and bias variables, W and b. Note that a name has been provided to create a named variable, which makes it easy to trace the variables in error or debug logs, or even in TensorBoard. So a good practice is to have your variables named.

# Linear model

y_pred = tf.add(tf.multiply(X, W), b)

Here, the add and multiply operations from the tf library have been used; they are similar to the NumPy operations.

# Cost / Loss / Objective function - Mean square error

cost = tf.reduce_mean(tf.square(y_pred - y))

Here we take the mean of the squared residuals, i.e. the mean squared error: cost = (1/N) * Σ (y_pred_i - y_i)^2.

Now we can define the optimizer. Here we use the gradient descent optimizer, but you can use any optimizer; TensorFlow provides several prebuilt options. A good overview of optimizers can be found here (http://ruder.io/optimizing-gradient-descent/).

# Randomly generated training data
x_batch = np.linspace(-1, 1, 101)
y_batch = 2 * x_batch + np.random.randn(*x_batch.shape) * 0.3

lr = 0.1
optimizer = tf.train.GradientDescentOptimizer(lr).minimize(cost)

Here, lr is the learning rate, a hyperparameter. Generally, keeping the learning rate small is beneficial, as it avoids overshooting the minimum (though too small a value also leads to very slow convergence). The gradients here are calculated using backpropagation; http://cs231n.github.io/optimization-2/ is a good reference for understanding backpropagation.


init = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(init)

    feed_dict = {X: x_batch, y: y_batch}
    for _ in range(30):
        loss_val, _ = session.run([cost, optimizer], feed_dict)
        print("loss:", loss_val)

    y_pred_batch = session.run(y_pred, {X: x_batch})

Note that the first line, tf.global_variables_initializer(), is used to initialize all variables, as all variables in TensorFlow need to be initialized before execution. This method is an easy way to initialize them all at once.

In TensorFlow, the graph is not executed until the session.run method is called. The session.run(fetches, feeds) method expects two parameters:

  • Fetches – a list of graph nodes whose output values should be returned.
  • Feeds – a dictionary mapping placeholders to the values that must be provided at execution time.

The with clause here, with tf.Session() as session, starts the TensorFlow session scope, in which the current session can be used to execute the TensorFlow operation graph.

Congratulations!!! You’ve got your TensorFlow introduction.

Where to go from here: follow a few more TensorFlow examples here (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/learn); these are excellent resources.

Thanks. ;)
